
    Connectivity measures for internet topologies.

    The topology of the Internet has initially been modelled as an undirected graph, where vertices correspond to so-called Autonomous Systems (ASs), and edges correspond to physical links between pairs of ASs. However, in order to capture the impact of routing policies, it has recently become apparent that one needs to classify the edges according to the existing economic relationships (customer-provider, peer-to-peer or siblings) between the ASs. This leads to a directed graph model in which traffic can be sent only along so-called valley-free paths. Four different algorithms have been proposed in the literature for inferring AS relationships using publicly available data from routing tables. We investigate the differences in the graph models produced by these algorithms, focussing on connectivity measures. To this aim, we compute the maximum number of vertex-disjoint valley-free paths between ASs as well as the size of a minimum cut separating a pair of ASs. Although these problems are solvable in polynomial time for ordinary graphs, they are NP-hard in our setting. We formulate the two problems as integer programs, and we propose a number of exact algorithms for solving them. For the problem of finding the maximum number of vertex-disjoint paths, we discuss two algorithms: the first is a branch-and-price algorithm based on the IP formulation, and the second is a non-LP-based branch-and-bound algorithm. For the problem of finding minimum cuts we use a branch-and-cut algorithm, based on the IP formulation of this problem. Using these algorithms, we obtain exact solutions for both problems in reasonable time. It turns out that there is a large gap in terms of the connectivity measures between the undirected and directed models. This finding supports our conclusion that economic relationships need to be taken into account when building a topology of the Internet.
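    The valley-free constraint described above can be made concrete with a small sketch. Assuming a hypothetical labelling of the edges along a path as 'up' (customer-to-provider), 'peer' (peer-to-peer), or 'down' (provider-to-customer), a path is valley-free if it consists of zero or more uphill edges, then at most one peer edge, then zero or more downhill edges (label names are illustrative, not from the paper):

    ```python
    def is_valley_free(path_labels):
        """Check the valley-free property: uphill edges first, then at most
        one peer-to-peer edge, then downhill edges; any stage may be empty."""
        UP, PEER, DOWN = 0, 1, 2
        stage = UP
        for lab in path_labels:
            if lab == 'up':
                if stage != UP:          # no climbing after a peer or downhill edge
                    return False
            elif lab == 'peer':
                if stage != UP:          # at most one peer edge, only after uphill
                    return False
                stage = PEER
            elif lab == 'down':
                stage = DOWN
            else:
                raise ValueError(f'unknown label {lab!r}')
        return True
    ```

    For example, `['up', 'up', 'peer', 'down']` is valley-free, while `['up', 'down', 'up']` contains a valley and is rejected; it is this restriction that makes disjoint-path and cut problems NP-hard in the directed model.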

    Online Independent Set Beyond the Worst-Case: Secretaries, Prophets, and Periods

    We investigate online algorithms for maximum (weight) independent set on graph classes with bounded inductive independence number, such as interval and disk graphs, with applications to, e.g., task scheduling and spectrum allocation. In the online setting, it is assumed that nodes of an unknown graph arrive one by one over time. An online algorithm has to decide whether an arriving node should be included into the independent set. Unfortunately, this natural and practically relevant online problem cannot be studied in a meaningful way within a classical competitive analysis, as the competitive ratio on worst-case input sequences is lower bounded by Ω(n). As a worst-case analysis is pointless, we study online independent set in a stochastic analysis. Instead of focussing on a particular stochastic input model, we present a generic sampling approach that enables us to devise online algorithms achieving performance guarantees for a variety of input models. In particular, our analysis covers stochastic input models like the secretary model, in which an adversarial graph is presented in random order, and the prophet-inequality model, in which a randomly generated graph is presented in adversarial order. Our sampling approach thus bridges between stochastic input models of quite different nature. In addition, we show that our approach can be applied to a practically motivated admission control setting. For graph classes with inductive independence number ρ, our sampling approach yields an online algorithm for maximum independent set with competitive ratio O(ρ²) with respect to all of the mentioned stochastic input models. The approach can be extended towards maximum-weight independent set by losing only a factor of O(log n) in the competitive ratio, with n denoting the (expected) number of nodes.
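    The sampling idea can be illustrated with a deliberately simplified sketch on interval graphs in the secretary model: observe a prefix of the random arrival order without accepting anything, then greedily accept every arriving interval that is independent of the accepted set. This is only an illustration of the sample-then-accept pattern, not the paper's O(ρ²)-competitive algorithm:

    ```python
    import random

    def online_independent_set(intervals, sample_frac=1 / 2.71828):
        """Simplified sampling-based online sketch for independent set on
        interval graphs. Secretary model: arrival order is a uniformly
        random permutation. (Illustrative only, not the paper's algorithm.)"""
        order = list(intervals)
        random.shuffle(order)                 # random arrival order
        sample_size = int(len(order) * sample_frac)
        accepted = []
        for i, (lo, hi) in enumerate(order):
            if i < sample_size:
                continue                      # observation phase: reject everything
            # accept iff the interval overlaps no previously accepted interval
            if all(hi <= a_lo or lo >= a_hi for a_lo, a_hi in accepted):
                accepted.append((lo, hi))
        return accepted
    ```

    By construction the output is always an independent set; the role of the observation phase in the actual analysis is to calibrate the acceptance decisions against the sampled prefix.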

    Parameterized temporal exploration problems

    In this paper we study the fixed-parameter tractability of the problem of deciding whether a given temporal graph G admits a temporal walk that visits all vertices (temporal exploration) or, in some problem variants, a certain subset of the vertices. Formally, a temporal graph is a sequence G = ⟨G1, . . . , GL⟩ of graphs with V(Gt) = V(G) and E(Gt) ⊆ E(G) for all t ∈ [L] and some underlying graph G, and a temporal walk is a time-respecting sequence of edge-traversals. We consider both the strict variant, in which edges must be traversed in strictly increasing timesteps, and the non-strict variant, in which an arbitrary number of edges can be traversed in each timestep. For both variants, we give FPT algorithms for the problem of finding a temporal walk that visits a given set X of vertices, parameterized by |X|, and for the problem of finding a temporal walk that visits at least k distinct vertices in V(G), parameterized by k. We also show W[2]-hardness for a set version of the temporal exploration problem for both variants. For the non-strict variant, we give an FPT algorithm for the temporal exploration problem parameterized by the lifetime of the input graph, and we show that the temporal exploration problem can be solved in polynomial time if the graph in each timestep has at most two connected components.
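    A basic building block for such algorithms is computing earliest-arrival times under the strict variant: at each timestep t the walk may traverse one edge of Gt, and timesteps must strictly increase along the walk. A minimal sketch, with the temporal graph given as a map from timestep to edge list (data representation is an assumption for illustration):

    ```python
    def earliest_arrival(temporal_edges, L, source):
        """Earliest-arrival times via strict temporal walks.
        temporal_edges: dict mapping timestep t in 1..L to a list of
        undirected edges (u, v) present in G_t."""
        INF = float('inf')
        arrival = {source: 0}
        for t in range(1, L + 1):
            for u, v in temporal_edges.get(t, []):
                for a, b in ((u, v), (v, u)):
                    # strictness: we must have reached a before timestep t
                    if arrival.get(a, INF) < t and arrival.get(b, INF) > t:
                        arrival[b] = t
        return arrival
    ```

    Because a vertex first reached at timestep t cannot forward the walk within the same timestep, this correctly enforces strictly increasing traversal times; the non-strict variant would instead allow chaining any number of edges inside one timestep.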

    Packing While Traveling: Mixed Integer Programming for a Class of Nonlinear Knapsack Problems

    Packing and vehicle routing problems play an important role in the area of supply chain management. In this paper, we introduce a non-linear knapsack problem that occurs when packing items along a fixed route and taking into account travel time. We investigate constrained and unconstrained versions of the problem and show that both are NP-hard. In order to solve the problems, we provide a pre-processing scheme as well as exact and approximate mixed integer programming (MIP) solutions. Our experimental results show the effectiveness of the MIP solutions and in particular point out that the approximate MIP approach often leads to near-optimal results within far less computation time than the exact approach.
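    The non-linearity arises because travel time on each leg depends on the weight already packed. A sketch of such an objective, assuming a linear speed-versus-load model and illustrative parameter names (profit minus rent times travel time; neither the constants nor the exact model are taken from the paper):

    ```python
    def pwt_objective(route_dists, items, chosen, v_max=10.0, v_min=1.0,
                      capacity=100.0, rent=1.0):
        """Objective sketch for packing along a fixed route.
        route_dists[i]: distance of leg i of the route.
        items[j] = (city, weight, value), loaded before leg `city`.
        chosen: set of item indices to pack."""
        load, profit, time = 0.0, 0.0, 0.0
        for i, dist in enumerate(route_dists):
            for j in chosen:
                city, weight, value = items[j]
                if city == i:               # item j is loaded before leg i
                    load += weight
                    profit += value
            # speed drops linearly from v_max (empty) to v_min (full)
            speed = v_max - load * (v_max - v_min) / capacity
            time += dist / speed            # nonlinear coupling of packing and travel
        return profit - rent * time
    ```

    Packing a heavy item early slows every subsequent leg, which is exactly the coupling that makes the knapsack objective non-linear and motivates the MIP formulations.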

    Measurement of cerebral oxygen pressure in living mice by two-photon phosphorescence lifetime microscopy

    The ability to quantify partial pressure of oxygen (pO2) is of primary importance for studies of metabolic processes in health and disease. Here, we present a protocol for imaging of oxygen distributions in tissue and vasculature of the cerebral cortex of anesthetized and awake mice. We describe in vivo two-photon phosphorescence lifetime microscopy (2PLM) of oxygen using the probe Oxyphor 2P. This minimally invasive protocol outperforms existing approaches in terms of accuracy, resolution, and imaging depth.
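    The computational core of phosphorescence lifetime oximetry is fitting the decay to obtain a lifetime and converting it to pO2 via the standard Stern-Volmer quenching relation, 1/τ = 1/τ0 + kq·pO2. A sketch with a simple log-linear fit; the calibration constants τ0 and kq would come from the probe's calibration and the values used here are purely illustrative:

    ```python
    import math

    def fit_lifetime(times, intensities):
        """Estimate lifetime tau from a single-exponential decay
        I(t) = I0 * exp(-t / tau) via linear regression on log intensity."""
        ys = [math.log(i) for i in intensities]
        n = len(times)
        mx = sum(times) / n
        my = sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
                 / sum((x - mx) ** 2 for x in times))
        return -1.0 / slope                  # slope of log decay is -1/tau

    def po2_from_lifetime(tau, tau0, kq):
        """Stern-Volmer conversion: 1/tau = 1/tau0 + kq * pO2."""
        return (1.0 / tau - 1.0 / tau0) / kq
    ```

    Shorter lifetimes correspond to higher oxygen pressure, since oxygen quenches the phosphorescence; in practice the fit is done on photon-counting histograms rather than a clean exponential.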

    Graph Reconstruction via Distance Oracles

    We study the problem of reconstructing a hidden graph given access to a distance oracle. We design randomized algorithms for the following problems: reconstruction of a degree-bounded graph with query complexity Õ(n^{3/2}); reconstruction of a degree-bounded outerplanar graph with query complexity Õ(n); and near-optimal approximate reconstruction of a general graph.
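    To make the oracle model concrete, here is the naive baseline the paper's algorithms improve on: query the distance of every pair and report an edge exactly when the distance is 1. This uses Θ(n²) queries, far above the Õ(n^{3/2}) bound for bounded-degree graphs, and is shown only for illustration:

    ```python
    def reconstruct_graph(n, dist):
        """Naive reconstruction of an unweighted graph on vertices 0..n-1
        from a distance oracle `dist(u, v)`: (u, v) is an edge iff the
        shortest-path distance between u and v equals 1. Theta(n^2) queries."""
        edges = set()
        for u in range(n):
            for v in range(u + 1, n):
                if dist(u, v) == 1:
                    edges.add((u, v))
        return edges
    ```

    The interesting algorithmic question is how to certify non-adjacency for most pairs without querying them, which is where randomization and the degree bound come in.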

    The VPN Conjecture Is True


    Commissioning Status of the Fritz Haber Institute THz FEL

    The THz Free-Electron Laser (FEL) at the Fritz Haber Institute (FHI) of the Max Planck Society in Berlin is designed to deliver radiation from 3 to 300 microns using a single-plane-focusing mid-IR undulator and a two-plane-focusing far-IR undulator that acts as a waveguide for the optical mode. A key aspect of the accelerator performance is the low longitudinal emittance, < 50 keV-psec, that is specified to be delivered at 200 pC bunch charge and 50 MeV from a gridded thermionic electron source. We utilize twin accelerating structures separated by a chicane to deliver the required performance over the < 20 - 50 MeV energy range. The first structure operates at near fixed field while the second structure controls the output energy, which, under some conditions, requires running in a decelerating mode. "First Light" is targeted for the centennial of the sponsor in October 2011 and we will describe progress in the commissioning of this device to achieve this goal. Specifically, the measured performance of the accelerated electron beam will be compared to design simulations and the observed matching of the beam to the mid-IR wiggler will be described.
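    The link between the quoted electron energy range and the output wavelength range follows from the standard on-axis resonance condition for a planar undulator, λ = λu/(2γ²)·(1 + K²/2). A small sketch; the undulator period and deflection parameter K below are illustrative round numbers, not the FHI design values:

    ```python
    def fel_wavelength(energy_mev, lambda_u_m, K):
        """On-axis FEL resonance wavelength for a planar undulator:
        lambda = lambda_u / (2 * gamma^2) * (1 + K^2 / 2)."""
        gamma = energy_mev / 0.511          # electron Lorentz factor (E / m_e c^2)
        return lambda_u_m / (2 * gamma ** 2) * (1 + K ** 2 / 2)
    ```

    With a 40 mm period and K = 1, a 50 MeV beam resonates near 3 microns and a 20 MeV beam near 20 microns, showing how tuning the beam energy (and K) sweeps the output across the mid-IR; reaching the longest far-IR wavelengths is why the second accelerating structure must also run in a decelerating mode.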

    Status of the Fritz Haber Institute THz FEL

    The THz FEL at the Fritz Haber Institute (FHI) in Berlin is designed to deliver radiation from 4 to 400 microns. A single-plane-focusing undulator combined with a 5.4 m long cavity is used in the mid-IR (< 50 micron), while a two-plane-focusing undulator in combination with a 7.2 m long cavity with a 1-d waveguide for the optical mode is used for the far-IR. A key aspect of the accelerator performance is low longitudinal emittance, < 50 keV-psec, at 200 pC bunch charge and 50 MeV from a gridded thermionic electron source. We utilize twin accelerating structures separated by a chicane to deliver the required performance over the < 20 - 50 MeV energy range. The first structure operates at near fixed field while the second structure controls the output energy, which, under some conditions, requires running in a decelerating mode. "First Light" is targeted for the centennial of the FHI in October 2011 and we will describe progress in the commissioning of this device. Specifically, the measured performance of the accelerated electron beam will be compared to design simulations and the observed matching of the beam to the mid-IR wiggler will be described.